Joint routing and resource allocation for wireless backhauling of small cell networks
Future communication networks must support an ever-growing volume of data traffic, which calls for efficient mechanisms to manage it. Starting from specific wireless-backhaul scenarios, we develop methods to jointly optimize the routing parameters and the resources of the network. We connect this optimization to the concepts of Software Defined Networking and network virtualization, which give us an overall view of the network and lead us to study its decomposition. To do so, we use convex optimization techniques, which have very efficient solution methods and yield tools for interpreting the results obtained and for analyzing the network parameters. The results show a large improvement over the non-optimized case in terms of carried traffic, an assessment we quantify in a final economic analysis.
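As a toy illustration of the joint routing/resource-allocation idea (a hypothetical sketch, not the thesis's actual convex program), one can picture traffic demands being routed over backhaul links of limited capacity, with the goal of maximizing total carried traffic. With a linear objective and capacity constraints this is a convex (linear) problem; the tiny instance below is filled by simple saturation for illustration only.

```python
# Hypothetical sketch: route each demand over the available backhaul links,
# filling link capacity greedily. This stands in for the convex optimization
# described in the abstract; it is NOT the thesis's formulation.

def carried_traffic(demands, capacities):
    """Return how much of each demand is carried given per-link capacities."""
    remaining = list(capacities)
    carried = []
    for d in demands:
        sent = 0.0
        for i in range(len(remaining)):
            take = min(d - sent, remaining[i])
            remaining[i] -= take
            sent += take
            if sent >= d:
                break
        carried.append(sent)
    return carried

# Two demands, two links: the second demand is partially dropped once
# capacity runs out -- the gap an optimized allocation tries to close.
print(carried_traffic([5.0, 4.0], [6.0, 2.0]))  # [5.0, 3.0]
```

In the non-optimized case some traffic is simply dropped; the point of the joint optimization is to choose routes and resources so that as much demand as possible is carried.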
Representing Spatial Trajectories as Distributions
We introduce a representation learning framework for spatial trajectories. We
represent partial observations of trajectories as probability distributions in
a learned latent space, which characterize the uncertainty about unobserved
parts of the trajectory. Our framework allows us to obtain samples from a
trajectory for any continuous point in time, both interpolating and
extrapolating. Our flexible approach supports directly modifying specific
attributes of a trajectory, such as its pace, as well as combining different
partial observations into single representations. Experiments show our method's
advantage over baselines in prediction tasks.
Comment: Accepted to NeurIPS 202
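The core idea can be caricatured as follows (a hypothetical stand-in: the `encode` and `sample_at` functions below are toy placeholders, not the paper's learned networks). A partial trajectory is mapped to a distribution in a latent space, from which plausible positions can be sampled at any continuous time, inside or outside the observed window.

```python
import random

# Toy sketch of "trajectory as distribution": encode observed (t, x) pairs
# into a Gaussian (mean, std), then sample a position for any continuous
# time t. The real method learns this encoding; here it is hand-crafted.

def encode(observations):
    """Map observed (t, x) pairs to a toy latent Gaussian (mean, std)."""
    xs = [x for _, x in observations]
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return mean, var ** 0.5

def sample_at(latent, t, rng):
    """Sample a plausible position at continuous time t from the latent."""
    mean, std = latent
    # Uncertainty grows with t: extrapolation is less certain than
    # interpolation, mirroring the abstract's motivation.
    return mean + t * rng.gauss(0.0, std)

rng = random.Random(0)
z = encode([(0.0, 1.0), (1.0, 3.0)])
print(sample_at(z, 0.5, rng))   # interpolation
print(sample_at(z, 2.0, rng))   # extrapolation
```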
FLEX: Full-Body Grasping Without Full-Body Grasps
Synthesizing 3D human avatars interacting realistically with a scene is an
important problem with applications in AR/VR, video games and robotics. Towards
this goal, we address the task of generating a virtual human -- hands and full
body -- grasping everyday objects. Existing methods approach this problem by
collecting a 3D dataset of humans interacting with objects and training on this
data. However, 1) these methods do not generalize to different object positions
and orientations, or to the presence of furniture in the scene, and 2) the
diversity of their generated full-body poses is very limited. In this work, we
address all the above challenges to generate realistic, diverse full-body
grasps in everyday scenes without requiring any 3D full-body grasping data. Our
key insight is to leverage the existence of both full-body pose and hand
grasping priors, composing them using 3D geometrical constraints to obtain
full-body grasps. We empirically validate that these constraints can generate a
variety of feasible human grasps that are superior to baselines both
quantitatively and qualitatively. See our webpage for more details:
https://flex.cs.columbia.edu/
ViperGPT: Visual Inference via Python Execution for Reasoning
Answering visual queries is a complex task that requires both visual
processing and reasoning. End-to-end models, the dominant approach for this
task, do not explicitly differentiate between the two, limiting
interpretability and generalization. Learning modular programs presents a
promising alternative, but has proven challenging due to the difficulty of
learning both the programs and modules simultaneously. We introduce ViperGPT, a
framework that leverages code-generation models to compose vision-and-language
models into subroutines to produce a result for any query. ViperGPT utilizes a
provided API to access the available modules, and composes them by generating
Python code that is later executed. This simple approach requires no further
training, and achieves state-of-the-art results across various complex visual
tasks.
Comment: Website: https://viper.cs.columbia.edu
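The execution pattern the abstract describes can be sketched in a few lines (a minimal caricature: `find` and `count` are hypothetical stand-ins for vision modules, and the "generated" program is hard-wired here rather than produced by a code-generation model).

```python
# Sketch of the ViperGPT-style pattern: expose modules through an API,
# let a code-generation model emit a Python program against that API,
# then execute the program to answer the query. The modules and the
# generated program below are illustrative stand-ins.

def find(image, name):
    """Stand-in detection module: return objects in `image` matching `name`."""
    return [obj for obj in image if obj == name]

def count(objects):
    """Stand-in counting module."""
    return len(objects)

# In the real system, this string would come from a code-generation model
# prompted with the query and the API specification.
generated_program = 'result = count(find(image, "mug"))'

scope = {"find": find, "count": count, "image": ["mug", "plate", "mug"]}
exec(generated_program, scope)
print(scope["result"])  # 2
```

Because the reasoning lives in an explicit program rather than inside an end-to-end model, each intermediate step can be inspected, which is the interpretability advantage the abstract claims.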
How concepts emerge in neural networks
To be defined at MIT. Deep learning models, and more specifically computer vision systems, have achieved great results in recent years. However, the interpretability and understanding of these models is still in its early stages. Interpretability can be approached at a low level, i.e. at the level of individual filters, but the representations learned by neural networks encode much higher-level knowledge that has to be approached from a semantic point of view, with concepts in mind. The goal of this project is to investigate the concepts neural networks learn implicitly when they are trained in an unsupervised scenario, with a special focus on the multimodal matching of words to visual objects and attributes. We study how we can detect these concepts, as well as how we can force the networks to learn more meaningful ones, both providing analytical insights and obtaining practical results.
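One simple way to probe whether a network has implicitly learned a concept (a hypothetical sketch, not this project's actual method) is to score how strongly a unit's activations over a set of images align with a binary concept label.

```python
# Toy concept probe: compare a unit's mean activation on images that
# contain a concept against its mean activation on images that do not.
# A large gap suggests the unit encodes that concept. Illustrative only.

def concept_alignment(activations, labels):
    """Mean activation on concept images minus mean on the rest."""
    pos = [a for a, l in zip(activations, labels) if l]
    neg = [a for a, l in zip(activations, labels) if not l]
    return sum(pos) / len(pos) - sum(neg) / len(neg)

acts = [0.9, 0.8, 0.1, 0.2]             # one unit over four images
has_dog = [True, True, False, False]    # hypothetical concept labels
print(concept_alignment(acts, has_dog))
```

The multimodal matching the project targets would replace the hand-made labels with word representations, scoring alignment between language and visual features instead.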